
    Commissioning of an ultra-high dose rate pulsed electron beam medical LINAC for FLASH RT preclinical animal experiments and future clinical human protocols.

    To present the acceptance and commissioning of an ultra-high dose rate (UHDR) electron device, to define the reference dose, and to prepare the reference data for a quality assurance (QA) program, in order to validate the device for preclinical animal FLASH radiotherapy (FLASH RT) experiments and for FLASH RT clinical human protocols. The Mobetron® device was evaluated with electron beams of 9 MeV in conventional (CONV) mode and of 6 and 9 MeV in UHDR mode (nominal energies). The acceptance was performed according to the manufacturer's acceptance protocol. The commissioning consisted of determining the short- and long-term stability of the device, measuring percent depth dose curves (PDDs) and profiles at two different positions (i.e., two different dose-per-pulse regimens) and for different collimator sizes, and evaluating the variability of these parameters when changing the pulse width and pulse repetition frequency. Measurements were performed using a redundant and validated dosimetric strategy based on alanine and radiochromic films, complemented by an Advanced Markus ionization chamber for some measurements. The acceptance tests were all within the tolerances of the manufacturer's acceptance protocol. The linearity with pulse width was within 1.5% in all cases. The pulse repetition frequency did not affect the delivered dose by more than 2% in all cases except 90 Hz, for which the largest difference was 3.8%. The reference dosimetry showed good agreement between alanine and films, with variations of 2.2% or less. The short-term stability was better than 1.0% and the long-term stability better than 1.8%, identical in both CONV and UHDR modes. PDDs, profiles, and reference dosimetry were measured at two positions, providing data for two specific dose-per-pulse values (about 9 Gy/pulse and 3 Gy/pulse). The maximal beam size at the 90% isodose was 4 and 6 cm at the two positions tested. There was no difference between CONV and UHDR modes in the beam characteristics tested. The device is commissioned for FLASH RT preclinical biological experiments as well as FLASH RT clinical human protocols.
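    The tolerance checks quoted above (pulse-width linearity within 1.5%, alanine/film agreement within 2.2%) lend themselves to simple scripted QA tests. The sketch below uses hypothetical measurement values rather than data from the paper and shows one way such checks could be expressed:

```python
# Minimal sketch of the linearity / agreement checks described above.
# All numerical values are illustrative placeholders, not measured data.
import numpy as np

def max_linearity_deviation(pulse_widths_us, doses_gy):
    """Largest relative deviation (%) of measured dose from a linear fit
    of dose versus pulse width."""
    slope, intercept = np.polyfit(pulse_widths_us, doses_gy, 1)
    predicted = slope * np.asarray(pulse_widths_us) + intercept
    deviation = 100.0 * np.abs(np.asarray(doses_gy) - predicted) / predicted
    return deviation.max()

def relative_difference(dose_a_gy, dose_b_gy):
    """Relative difference (%) between two dosimeters, e.g. alanine vs. film."""
    return 100.0 * abs(dose_a_gy - dose_b_gy) / ((dose_a_gy + dose_b_gy) / 2.0)

# Hypothetical example: dose per pulse measured at four pulse widths.
pw = [0.5, 1.0, 2.0, 4.0]        # pulse width in microseconds (assumed)
dose = [1.1, 2.2, 4.5, 8.9]      # dose in Gy (assumed)
print(max_linearity_deviation(pw, dose) <= 1.5)   # 1.5% tolerance from the text
print(relative_difference(9.0, 8.8) <= 2.2)       # 2.2% agreement from the text
```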

    The Open Global Glacier Model (OGGM) v1.1

    Despite their importance for sea-level rise, seasonal water availability, and as a source of geohazards, mountain glaciers are one of the few remaining subsystems of the global climate system for which no globally applicable, open source, community-driven model exists. Here we present the Open Global Glacier Model (OGGM), developed to provide a modular and open-source numerical model framework for simulating past and future change of any glacier in the world. The modeling chain comprises data downloading tools (glacier outlines, topography, climate, validation data), a preprocessing module, a mass-balance model, a distributed ice thickness estimation model, and an ice-flow model. The monthly mass balance is obtained from gridded climate data and a temperature index melt model. To our knowledge, OGGM is the first global model to explicitly simulate glacier dynamics: the model relies on the shallow-ice approximation to compute the depth-integrated flux of ice along multiple connected flow lines. In this paper, we describe and illustrate each processing step by applying the model to a selection of glaciers before running global simulations under idealized climate forcings. Even without an in-depth calibration, the model shows very realistic behavior. We are able to reproduce earlier estimates of global glacier volume by varying the ice dynamical parameters within a range of plausible values. At the same time, the increased complexity of OGGM compared to other prevalent global glacier models comes at a reasonable computational cost: several dozen glaciers can be simulated on a personal computer, whereas global simulations realized in a supercomputing environment take up to a few hours per century. Thanks to the modular framework, modules of various complexity can be added to the code base, which allows for new kinds of model intercomparison studies in a controlled environment. Future developments will add new physical processes to the model as well as automated calibration tools. Extensions or alternative parameterizations can be easily added by the community thanks to comprehensive documentation. OGGM spans a wide range of applications, from ice–climate interaction studies at millennial timescales to estimates of the contribution of glaciers to past and future sea-level change. It has the potential to become a self-sustained community-driven model for global and regional glacier evolution.
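    The monthly mass-balance step described above (gridded climate data fed into a temperature-index melt model) can be illustrated with a minimal sketch; the function, parameter names, and values below are assumptions for illustration and do not reproduce OGGM's actual calibrated implementation:

```python
# Minimal sketch of a monthly temperature-index mass-balance model,
# in the spirit of the description above. All parameters are assumed.

def monthly_mass_balance(temp_c, precip_mm, melt_factor=150.0,
                         temp_melt=0.0, temp_solid=2.0):
    """Monthly specific mass balance (mm w.e.) for one elevation band.

    temp_c      : monthly mean air temperature at the band (deg C)
    precip_mm   : monthly precipitation (mm)
    melt_factor : melt per positive deg C per month (mm w.e.), assumed value
    temp_melt   : temperature above which melt occurs (deg C)
    temp_solid  : temperature below which precipitation falls as snow (deg C)
    """
    accumulation = precip_mm if temp_c <= temp_solid else 0.0
    melt = melt_factor * max(temp_c - temp_melt, 0.0)
    return accumulation - melt

# Hypothetical annual cycle (12 monthly means) for one elevation band.
temps = [-8.0, -7.0, -5.0, -2.0, 1.0, 4.0, 7.0, 6.0, 3.0, -1.0, -4.0, -7.0]
precs = [100.0] * 12
annual_mb = sum(monthly_mass_balance(t, p) for t, p in zip(temps, precs))
print(f"annual specific mass balance: {annual_mb:.0f} mm w.e.")
```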

    Application of Plasticity Theory to Reinforced Concrete Deep Beams

    This paper reviews the application of the plasticity theory to reinforced concrete deep beams. Both the truss analogy and the mechanism approach were employed to predict the capacity of reinforced concrete deep beams. In addition, most current codes of practice, for example Eurocode 1992 and ACI 318-05, recommend the strut-and-tie model for designing reinforced concrete deep beams. Compared with methods based on empirical or semi-empirical equations, the strut-and-tie model and mechanism analyses are more rational, adequately accurate and sufficiently simple for estimating the load capacity of reinforced concrete deep beams. However, there remains the problem of selecting the effectiveness factor of concrete, as reflected in the wide range of values reported in the literature for deep beams.
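    To make the role of the effectiveness factor concrete, the following minimal sketch (assumed geometry and ν values, not formulas from any particular code) shows how the predicted capacity of a single strut scales directly with the chosen ν:

```python
# Minimal sketch: sensitivity of a concrete strut capacity to the
# effectiveness factor (nu). Geometry and nu values are assumed for
# illustration only; they are not taken from any specific design code.

def strut_capacity_kn(f_c_mpa, width_mm, thickness_mm, nu):
    """Axial capacity of a prismatic concrete strut, C = nu * f_c * A_strut,
    returned in kN."""
    area_mm2 = width_mm * thickness_mm
    return nu * f_c_mpa * area_mm2 / 1000.0  # N -> kN

f_c = 30.0         # concrete cylinder strength, MPa (assumed)
width = 200.0      # strut width, mm (assumed)
thickness = 150.0  # beam thickness, mm (assumed)

# The wide range of effectiveness factors reported in the literature
# translates directly into a wide range of predicted capacities:
for nu in (0.4, 0.6, 0.85):
    cap = strut_capacity_kn(f_c, width, thickness, nu)
    print(f"nu = {nu:.2f}: strut capacity = {cap:.0f} kN")
```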

    A small scale remote cooling system for a superconducting cyclotron magnet

    Through a technology transfer program, CERN is involved in the R&D of a compact superconducting cyclotron for future clinical radioisotope production, a project led by the Spanish research institute CIEMAT. For the remote cooling of the LTc superconducting magnet operating at 4.5 K, CERN has designed a small-scale refrigeration system, the Cryogenic Supply System (CSS). This refrigeration system consists of a commercial two-stage Gifford-McMahon (GM) cryocooler providing 1.5 W at 4.2 K and a separate forced-flow circuit. The forced-flow circuit extracts the cooling power from the first- and second-stage cold tips. Both units are installed in a common vacuum vessel and, in the final configuration, a low-loss transfer line will provide the link to the magnet cryostat for the cooling of the thermal shield with helium at 40 K and of the two superconducting coils with two-phase helium at 4.5 K. Currently the CSS is in the testing phase at CERN in stand-alone mode, without the magnet and the transfer line. We have added a "validation unit" housed in the vacuum vessel of the CSS that represents the thermo-hydraulic part of the cyclotron magnet. It is equipped with electrical heaters which allow the thermal loads of the magnet cryostat to be simulated. A cooling power of 1.4 W at 4.5 K and of 25 W at the thermal shield temperature level has been measured. The data produced confirm the design principles of the CSS, which could thus be validated.
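    The quoted test results can be read as a heat-budget margin check: the validation unit's heaters emulate the magnet loads, and the measured cooling powers must cover them. The sketch below uses the cooling powers from the abstract together with hypothetical load values for illustration:

```python
# Minimal sketch of the heat-budget comparison enabled by the validation
# unit: measured cooling power vs. simulated magnet loads. The load values
# are hypothetical placeholders; the cooling powers are from the abstract.

MEASURED_COOLING_W = {"4.5 K circuit": 1.4, "thermal shield": 25.0}

# Heater settings emulating the magnet cryostat loads (assumed values).
SIMULATED_LOAD_W = {"4.5 K circuit": 1.0, "thermal shield": 20.0}

for circuit, available in MEASURED_COOLING_W.items():
    load = SIMULATED_LOAD_W[circuit]
    margin = available - load
    status = "OK" if margin >= 0 else "INSUFFICIENT"
    print(f"{circuit}: load {load:.1f} W, available {available:.1f} W, "
          f"margin {margin:+.1f} W -> {status}")
```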

    The SEC's Misguided Climate Disclosure Rule Proposal

    The following article adapts and consolidates two comment letters submitted last spring by a group of twenty-two professors of finance and law on the SEC's proposed climate change disclosure rules. The professors reiterate their recommendation that the SEC withdraw its proposal as legally misguided, while outlining some of the issues that the proposal will face when challenged in court.

    INDIGO-DataCloud: a Platform to Facilitate Seamless Access to E-Infrastructures

    This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications and cloud framework enhancements to manage the demanding requirements of scientific communities, either locally or through enhanced interfaces. The middleware developed allows hybrid resources to be federated and scientific applications to be easily written, ported and run on the cloud. In particular, we have extended existing PaaS (Platform as a Service) solutions, allowing public and private e-infrastructures, including those provided by EGI, EUDAT, and Helix Nebula, to integrate their existing services and make them available through AAI services compliant with GEANT interfederation policies, thus guaranteeing transparency and trust in the provisioning of such services. Our middleware facilitates the execution of applications using containers on Cloud and Grid based infrastructures, as well as on HPC clusters. Our developments are freely downloadable as open source components, and are already being integrated into many scientific applications. INDIGO-DataCloud has been funded by the European Commission H2020 research and innovation program under grant agreement RIA 653549.
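    As an example of the container-based execution path mentioned above, the following sketch drives udocker (the user-space container tool developed within INDIGO-DataCloud) from Python to pull and run an application image without root privileges; the image name, container name, and command are placeholders, and the sketch assumes udocker's standard pull/create/run subcommands are available on the host:

```python
# Minimal sketch: running a containerized application with udocker
# (user-space container execution, no root privileges required).
# Image name, container name and command are illustrative placeholders.
import subprocess

IMAGE = "docker.io/library/python:3.10-slim"   # placeholder image
CONTAINER = "indigo-demo"                      # placeholder container name

def udocker(*args):
    """Run a udocker subcommand and fail loudly if it errors."""
    subprocess.run(["udocker", *args], check=True)

udocker("pull", IMAGE)
udocker("create", f"--name={CONTAINER}", IMAGE)
udocker("run", CONTAINER, "python3", "-c", "print('hello from a udocker container')")
```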